WIP: lk: Address memory aliasing issue #265

Open · wants to merge 1 commit into base: master
Conversation

Contributor

@vishals4gh commented Jul 21, 2020

By default, most platforms map all of DRAM at boot
time, creating scenarios where the same physical
address, initially mapped with cacheable attributes,
may also be mapped with non-cacheable attributes via
a different VM range. This is discouraged on
architectures such as ARM, and it becomes tedious to
ensure a coherent view of DRAM with other bus masters.

This change ensures the following for the qemu-arm platform:

  1. Add support for arenas which may not be mapped
    into kernel space after platform setup.
  2. All malloc calls use pages from the arenas that
    are already mapped into kernel space.
  3. vmm_alloc* APIs use the arenas that are not already
    mapped to any virtual address range.
  4. vmm_free_region for memory allocated via vmm_alloc*
    APIs with cacheable mappings will clean the caches,
    allowing reuse by the next vmm_alloc* call, which can
    map the memory with different attributes.
  5. Memory for an unmapped arena is initially allowed to
    be mapped, and is then unmapped later during platform
    initialization.

This avoids remapping the same physical memory to
different virtual address ranges with different memory
attributes. It effectively ensures that, at any given
time, memory from a vmm arena is owned by a single
entity with a particular set of memory attributes.

Caveats:

  1. The paddr_to_kvaddr API will not work for physical
    addresses allocated using the vmm_alloc* APIs.

ToDo:

  1. Address the shortfalls of the current implementation.
  2. Update other platforms to allow unmapped RAM arenas,
    if this implementation is worth pursuing.

Signed-off-by: vannapurve [email protected]

@vishals4gh
Contributor Author

Rather than a merge request, this is more of a review request to seek feedback on whether:

  1. This issue is something that should be addressed.
  2. The approach I tried is fine to proceed with.

This patch is still WIP.

Thanks,
Vishal
